3. Evaluation of Unsupervised Automatic Readability Assessors Using Rank Correlations
4. Analysis of Language Change in Collaborative Instruction Following
5. Learning Feature Weights using Reward Modeling for Denoising Parallel Corpora
6. Cross-lingual Aspect-based Sentiment Analysis with Aspect Term Code-Switching
7. Cross-lingual Transfer for Text Classification with Dictionary-based Heterogeneous Graph
8. NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset
10. An Unsupervised Method for Building Sentence Simplification Corpora in Multiple Languages
12. SD-QA: Spoken Dialectal Question Answering for the Real World
13. Plan-then-Generate: Controlled Data-to-Text Generation via Planning
14. Sparsity and Sentence Structure in Encoder-Decoder Attention of Summarization Systems

Anthology paper link: https://aclanthology.org/2021.emnlp-main.739/

Abstract: Transformer models have achieved state-of-the-art results in a wide range of NLP tasks, including summarization. Training and inference using large transformer models can be computationally expensive. Previous work has focused on one important bottleneck: the quadratic self-attention mechanism in the encoder. Modified encoder architectures such as LED or LoBART use local attention patterns to address this problem for summarization. In contrast, this work focuses on the transformer's encoder-decoder attention mechanism. The cost of this attention becomes more significant in inference or training approaches that require model-generated histories. First, we examine the complexity of the encoder-decoder attention. We demonstrate empirically that there is a sparse sentence structure in document summarization that can be exploited by constraining the attention mechanism to a subset of input sentences, whilst maintaining system ...
DOI: https://dx.doi.org/10.48448/t5yn-e724
Talk: https://underline.io/lecture/37392-sparsity-and-sentence-structure-in-encoder-decoder-attention-of-summarization-systems
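The abstract describes constraining encoder-decoder (cross-) attention so the decoder attends only to a subset of the input sentences. As a rough illustration of that general idea (a minimal sketch, not the paper's implementation; the function name, the hard sentence mask, and the toy data are all assumptions), a single-head cross-attention step with a sentence-level mask might look like this in PyTorch:

```python
# Sketch (assumptions, not the authors' code): single-head encoder-decoder
# attention restricted to a chosen subset of input sentences.
import torch
import torch.nn.functional as F

def sentence_constrained_cross_attention(q, k, v, sentence_ids, keep_sentences):
    """q: (tgt_len, d) decoder queries; k, v: (src_len, d) encoder states.
    sentence_ids: (src_len,) sentence index of each encoder position.
    keep_sentences: sentence indices the decoder may attend to
    (assumes at least one encoder position survives the mask).
    """
    d = q.size(-1)
    scores = (q @ k.T) / d ** 0.5                        # (tgt_len, src_len)
    keep = torch.isin(sentence_ids, torch.as_tensor(list(keep_sentences)))
    scores = scores.masked_fill(~keep, float("-inf"))    # block other sentences
    return F.softmax(scores, dim=-1) @ v                 # (tgt_len, d)

# Toy check: 6 encoder positions over 3 sentences; keep sentences {0, 2}.
torch.manual_seed(0)
q, k, v = torch.randn(2, 8), torch.randn(6, 8), torch.randn(6, 8)
sentence_ids = torch.tensor([0, 0, 1, 1, 2, 2])
out = sentence_constrained_cross_attention(q, k, v, sentence_ids, {0, 2})
print(out.shape)  # torch.Size([2, 8])
```

In practice the retained subset would be chosen per document (e.g., by some sentence-scoring step) and the mask applied inside each cross-attention head of the decoder; the hard -inf mask here is just the simplest way to realize the "attend only to selected sentences" constraint the abstract mentions.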
15. Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication
16. Live Session - 4E: Phonology, Morphology and Word Segmentation
19. Rule-based Morphological Inflection Improves Neural Terminology Translation
20. Translating Headers of Tabular Data: A Pilot Study of Schema Translation